
Fix FD leak, use-after-free, and pipe cleanup in server #27

Open
Waskim2 wants to merge 3 commits into EndPositive:main from Waskim2:fix/fd-leak-and-use-after-free

Conversation


@Waskim2 Waskim2 commented Mar 21, 2026

Summary

  • Fix pipe FD leak: slipstream_server_free_stream_context() only closed the socket FD but never closed pipefd[0] and pipefd[1], leaking 2 file descriptors per stream. Over time this caused Bad file descriptor errors and connection failures.
  • Fix use-after-free in callback_close: server_ctx was freed via slipstream_server_free_context() before accessing server_ctx->thread_ctx on the next line, leading to potential crashes/segfaults.
  • Close pipe write-end on stream fin: When a stream finished, only the socket FD was closed but pipefd[1] remained open, causing the slipstream_io_copy thread to hang indefinitely on read().

Reproduction

Connect a SlipNet client to a slipstream server with SOCKS target. After ~3 minutes of active use, the server logs fill with:

connect() failed: Bad file descriptor
File descriptor XXX was closed
send failed: Broken pipe

The connection drops and health checks fail. FD count grows continuously until the process becomes unusable.

Test plan

  • Verified server runs stable for 15+ minutes under active client load (previously crashed after ~3 minutes)
  • Verified FD count stabilizes instead of growing unboundedly
  • Bad file descriptor errors no longer appear in logs during normal operation

Waskim2 and others added 3 commits March 21, 2026 17:01
- Close pipe FDs (pipefd[0], pipefd[1]) in free_stream_context,
  previously only the socket FD was closed causing 2 FDs to leak
  per stream
- Fix use-after-free in callback_close: server_ctx was freed
  before accessing server_ctx->thread_ctx
- Close pipe write-end on stream fin so io_copy thread exits
  cleanly instead of hanging indefinitely

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
slipstream_server_poller threads were created on every
prepare_to_send callback that arrived with no data available,
without checking whether a poller was already active for the
stream. This caused hundreds of threads to accumulate over time,
exhausting resources and eventually crashing the server.

Add polling_active flag to stream context to ensure only one
poller thread exists per stream at a time.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Stream contexts were never freed in most cases because
picoquic_reset_stream() triggered a callback with stream_ctx=NULL,
so the reset handler couldn't clean up.

Now free_stream_context is called directly before reset in all
error/close paths in prepare_to_send and stream_data callbacks:
- ioctl failure
- recv returning 0 (connection closed)
- recv returning -1 (error)
- write EPIPE (pipe closed)
- write error

This was the main source of FD/thread accumulation causing the
server to become unresponsive after extended use.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>